
Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning Supplementary Material Anonymous Author(s) Affiliation Address email

Neural Information Processing Systems

Moreover, we show more visualization results in the experiments. To ensure a fair comparison, we used the same fusion and optimization method as Latefusion. When k=1, the object's physical properties are related only to itself. As described in Section 3.1 of our paper, we represent audio ... Table 2: Performance comparison between our proposed DSE-audio and existing baseline methods. As shown in Table 2, we compare our method with other baseline methods. In Figure 6, we show a few additional examples of clustering using dynamic factors.
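The role of k above can be illustrated with a minimal sketch: each object is related to its k nearest neighbours in some feature space, so k=1 means every object relates only to itself. This is our own illustration (the function name and the Euclidean-distance choice are assumptions, not the paper's implementation):

```python
import math

def knn_relations(features, k):
    """For each object, return the indices of its k most similar
    objects (including itself) by Euclidean distance."""
    relations = []
    for fi in features:
        # Sort all object indices by distance to the current object.
        order = sorted(range(len(features)),
                       key=lambda j: math.dist(fi, features[j]))
        relations.append(order[:k])
    return relations

objs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
print(knn_relations(objs, 1))  # → [[0], [1], [2]]: each object relates only to itself
```

With k=2, each object would additionally pick up its single nearest neighbour, modelling pairwise physical-knowledge relationships.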



Disentangled Counterfactual Learning for Physical Audiovisual Commonsense Reasoning

Neural Information Processing Systems

In this paper, we propose a Disentangled Counterfactual Learning (DCL) approach for physical audiovisual commonsense reasoning. The task aims to infer objects' physical commonsense from both video and audio input, where the main challenge is imitating the reasoning ability of humans. Most current methods fail to take full advantage of the different characteristics of multi-modal data, and the lack of causal reasoning ability in models impedes progress in inferring implicit physical knowledge. To address these issues, our proposed DCL method decouples videos into static (time-invariant) and dynamic (time-varying) factors in the latent space via a disentangled sequential encoder, which adopts a variational autoencoder (VAE) and maximizes mutual information with a contrastive loss function. Furthermore, we introduce a counterfactual learning module to augment the model's reasoning ability by modeling physical knowledge relationships among different objects under counterfactual intervention. Our proposed method is a plug-and-play module that can be incorporated into any baseline. In experiments, we show that our proposed method improves baseline methods and achieves state-of-the-art performance. Our source code is available at https://github.com/Andy20178/DCL.
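A generic form of the objective described above can be sketched as follows. This is a sketch under common assumptions for sequential VAEs with a contrastive mutual-information term; the symbols ($s$ for the static factor, $z_{1:T}$ for the dynamic factors, $x_{1:T}$ for the input frames) and the exact weighting are our illustration, and may differ from the paper's formulation:

```latex
\mathcal{L} \;=\;
\underbrace{\mathbb{E}_{q}\!\left[\log p\!\left(x_{1:T}\mid s, z_{1:T}\right)\right]}_{\text{reconstruction}}
\;-\;
\underbrace{D_{\mathrm{KL}}\!\left(q\!\left(s, z_{1:T}\mid x_{1:T}\right)\,\middle\|\,p(s)\,p\!\left(z_{1:T}\right)\right)}_{\text{factorized prior encourages disentanglement}}
\;+\;
\lambda \underbrace{\left( I\!\left(s;\, x_{1:T}\right) + I\!\left(z_{1:T};\, x_{1:T}\right) \right)}_{\text{mutual information, contrastive estimate}}
```

where the mutual-information terms are maximized via a contrastive (InfoNCE-style) lower bound rather than computed exactly.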

